In this podcast episode, Nathan Lambert and Sebastian Raschka discuss the state of AI in 2026, exploring advancements in large language models, scaling laws, tool use, open-source models, post-training techniques, and the broader implications of AI for human civilization.
Sebastian Raschka offers an in-depth look at the LLM landscape in 2026, highlighting key developments in post-training techniques such as RLVR and GRPO, inference-time scaling, tool use, and the continued dominance of the transformer architecture, which has evolved through incremental improvements rather than wholesale replacement.